Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm
The primal-dual optimization algorithm developed in Chambolle and Pock (CP),
2011 is applied to various convex optimization problems of interest in computed
tomography (CT) image reconstruction. This algorithm allows for rapid
prototyping of optimization problems for the purpose of designing iterative
image reconstruction algorithms for CT. The primal-dual algorithm is briefly
summarized in the article, and its potential for prototyping is demonstrated by
explicitly deriving CP algorithm instances for many optimization problems
relevant to CT. An example application modeling breast CT with low-intensity
X-ray illumination is presented.
Comment: Resubmitted to Physics in Medicine and Biology. Text has been
modified according to referee comments, and typos in the equations have been
corrected.
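For readers unfamiliar with the primal-dual scheme referenced above, the sketch below shows a basic Chambolle-Pock iteration on a toy problem (min_x 0.5||Ax - b||^2 + lam*||x||_1, with a random matrix standing in for a CT system matrix). It is a minimal illustration of the prototyping idea, not the authors' derivations; the problem sizes and the parameter lam are arbitrary assumptions.

```python
# Minimal Chambolle-Pock (CP) sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# Toy stand-in for a CT data-fidelity + sparsity problem; not the authors' code.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def chambolle_pock(A, b, lam, n_iter=300):
    m, n = A.shape
    L = np.linalg.norm(A, 2)          # operator norm of A
    tau = sigma = 1.0 / L             # step sizes satisfying tau*sigma*L^2 <= 1
    theta = 1.0
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
    for _ in range(n_iter):
        # dual step: prox of F*, where F(z) = 0.5*||z - b||^2
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: prox of G, where G(x) = lam*||x||_1 (soft-thresholding)
        x_new = soft_threshold(x - tau * (A.T @ y), tau * lam)
        # over-relaxation / extrapolation
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# toy usage: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[rng.choice(200, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = chambolle_pock(A, b, lam=0.1)
```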
Beam Orientation Optimization for Intensity Modulated Radiation Therapy using Adaptive l1 Minimization
Beam orientation optimization (BOO) is a key component in the process of IMRT
treatment planning. It determines to what degree one can achieve a good
treatment plan quality in the subsequent plan optimization process. In this
paper, we have developed a BOO algorithm via adaptive l_1 minimization.
Specifically, we introduce a sparsity energy function term into our model which
contains weighting factors for each beam angle adaptively adjusted during the
optimization process. Such an energy term favors a small number of beam angles.
By optimizing a total energy function containing a dosimetric term and the
sparsity term, we are able to identify the unimportant beam angles and
gradually remove them without largely sacrificing the dosimetric objective. In
one typical prostate case, the convergence property of our algorithm, as well
as how the beam angles are selected during the optimization process, is
demonstrated. Fluence map optimization (FMO) is then performed based on the
optimized beam angles. The resulting plan quality is presented and found to be
better than that obtained from unoptimized (equiangular) beam orientations. We
have further systematically validated our algorithm in the contexts of 5-9
coplanar beams for 5 prostate cases and 1 head and neck case. For each case,
the final FMO objective function value is used to compare the optimized beam
orientations and the equiangular ones. It is found that our BOO algorithm can
lead to beam configurations which attain lower FMO objective function values
than corresponding equiangular cases, indicating the effectiveness of our BOO
algorithm.
Comment: 19 pages, 2 tables, and 5 figures
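A minimal sketch of the adaptive (reweighted) l_1 idea for beam selection is given below; it is not the authors' implementation. Fluence variables are grouped per candidate beam, and a group-sparsity penalty with per-beam weights (updated from the current group norms) drives unimportant beams toward zero. The dose matrix D, prescription d, and all sizes and hyperparameters are toy placeholders.

```python
# Reweighted group-sparse fluence optimization via proximal gradient (toy sketch).
import numpy as np

def beam_selection(D, d, groups, lam=1.0, n_outer=10, n_inner=200):
    """Minimize 0.5*||Dx - d||^2 + lam * sum_b w_b * ||x_b||, reweighting w_b adaptively."""
    x = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # gradient step for the smooth term
    w = np.ones(len(groups))                       # per-beam weights, adapted each outer loop
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = x - step * (D.T @ (D @ x - d))
            x = z.copy()
            # group soft-thresholding: shrink each beam's fluence block as a whole
            for g, idx in enumerate(groups):
                norm_g = np.linalg.norm(z[idx])
                shrink = max(0.0, 1.0 - step * lam * w[g] / (norm_g + 1e-12))
                x[idx] = shrink * z[idx]
        # adapt weights: beams with small fluence are penalized more strongly next round
        w = 1.0 / (np.array([np.linalg.norm(x[idx]) for idx in groups]) + 1e-3)
    return x

# toy usage: 12 candidate beams, 5 beamlets each, synthetic dose matrix
rng = np.random.default_rng(1)
groups = [np.arange(5 * b, 5 * (b + 1)) for b in range(12)]
D = rng.random((40, 60))
d = rng.random(40)
x = beam_selection(D, d, groups)
active_beams = [b for b, idx in enumerate(groups) if np.linalg.norm(x[idx]) > 1e-6]
```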
A parametric level-set method for partially discrete tomography
This paper introduces a parametric level-set method for tomographic
reconstruction of partially discrete images. Such images consist of a
continuously varying background and an anomaly with a constant (known)
grey-value. We represent the geometry of the anomaly using a level-set
function, which we in turn parameterize with radial basis functions. We pose the
reconstruction problem as a bi-level optimization problem in terms of the
background and coefficients for the level-set function. To constrain the
background reconstruction we impose smoothness through Tikhonov regularization.
The bi-level optimization problem is solved in an alternating fashion; in each
iteration we first reconstruct the background and subsequently update the
level-set function. We test our method on numerical phantoms and show that we
can successfully reconstruct the geometry of the anomaly, even from limited
data. On these phantoms, our method outperforms Total Variation reconstruction,
DART and P-DART.
Comment: Paper submitted to 20th International Conference on Discrete Geometry
for Computer Imagery
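The sketch below illustrates the parametric level-set representation in isolation (not the authors' reconstruction code): the level-set function is a sum of Gaussian radial basis functions, and the image is composed as a continuous background outside the zero level set plus a constant grey-value inside. The grid size, RBF centres and width, and the Heaviside smoothing parameter are illustrative choices.

```python
# Parametric level-set image model with Gaussian RBFs (toy sketch).
import numpy as np

def rbf_level_set(coeffs, centers, width, grid):
    """phi(x) = sum_j coeffs[j] * exp(-||x - centers[j]||^2 / width^2), evaluated on a grid."""
    d2 = ((grid[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width ** 2) @ coeffs

def smoothed_heaviside(phi, eps=0.1):
    """Differentiable approximation of the indicator of {phi > 0}."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def compose_image(background, coeffs, centers, width, grid, anomaly_value=1.0):
    """Partially discrete image: continuous background + constant-valued anomaly."""
    h = smoothed_heaviside(rbf_level_set(coeffs, centers, width, grid))
    return (1.0 - h) * background + h * anomaly_value

# toy usage on a 32x32 grid with a 4x4 lattice of RBF centres
n = 32
xs = np.linspace(0, 1, n)
grid = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)
centers = np.stack(np.meshgrid(np.linspace(0.2, 0.8, 4),
                               np.linspace(0.2, 0.8, 4)), -1).reshape(-1, 2)
coeffs = np.full(16, -0.2); coeffs[5] = 1.5          # one "blob" anomaly
background = 0.3 * np.ones(n * n)
u = compose_image(background, coeffs, centers, width=0.15, grid=grid).reshape(n, n)
```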
GPU-based Iterative Cone Beam CT Reconstruction Using Tight Frame Regularization
X-ray imaging dose from serial cone-beam CT (CBCT) scans raises a clinical
concern in most image guided radiation therapy procedures. It is the goal of
this paper to develop a fast GPU-based algorithm to reconstruct high quality
CBCT images from undersampled and noisy projection data so as to lower the
imaging dose. For this purpose, we have developed an iterative tight frame (TF)
based CBCT reconstruction algorithm. A condition that a real CBCT image has a
sparse representation under a TF basis is imposed in the iteration process as
a regularization on the solution. To speed up the computation, a multi-grid
method is employed. Our GPU implementation has achieved high computational
efficiency and a CBCT image of resolution 512×512×70 can be
reconstructed in ~5 min. We have tested our algorithm on a digital NCAT phantom
and a physical Catphan phantom. It is found that our TF-based algorithm is able
to reconstruct CBCT in the context of undersampling and low mAs levels. We have
also quantitatively analyzed the reconstructed CBCT image quality in terms of
modulation-transfer-function and contrast-to-noise ratio under various scanning
conditions. The results confirm the high CBCT image quality obtained from our
TF algorithm. Moreover, our algorithm has also been validated in a real
clinical context using a head-and-neck patient case. Comparisons of the
developed TF algorithm and the current state-of-the-art TV algorithm have also
been made in various cases studied in terms of reconstructed image quality and
computation efficiency.
Comment: 24 pages, 8 figures, accepted by Phys. Med. Biol.
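As a rough illustration of tight-frame regularized iteration (not the authors' GPU code), the sketch below alternates a gradient step on the data fidelity with soft-thresholding of framelet coefficients. For brevity it works on a 1-D signal with an undecimated Haar framelet (a Parseval frame, so W^T W = I) and a random matrix standing in for the cone-beam projection operator; the threshold and step size are illustrative.

```python
# Tight-frame regularized iterative reconstruction, 1-D toy sketch.
import numpy as np

def frame_analysis(u):
    """Undecimated Haar framelet: low-pass and high-pass channels (circular shifts)."""
    low = 0.5 * (u + np.roll(u, -1))
    high = 0.5 * (u - np.roll(u, -1))
    return low, high

def frame_synthesis(low, high):
    """Adjoint of the analysis operator; W^T W = I for this filter pair."""
    return 0.5 * (low + np.roll(low, 1)) + 0.5 * (high - np.roll(high, 1))

def tf_reconstruct(A, b, thr=0.05, n_iter=200):
    """Gradient step on the data fidelity, then soft-threshold the frame coefficients."""
    u = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        u = u - step * (A.T @ (A @ u - b))          # enforce consistency with projections
        low, high = frame_analysis(u)
        high = np.sign(high) * np.maximum(np.abs(high) - thr, 0.0)   # sparsify detail band
        u = frame_synthesis(low, high)
    return u

# toy usage: piecewise-constant signal from undersampled, noisy linear measurements
rng = np.random.default_rng(2)
u_true = np.concatenate([np.zeros(40), np.ones(30), 0.5 * np.ones(30)])
A = rng.standard_normal((60, 100))
b = A @ u_true + 0.01 * rng.standard_normal(60)
u_hat = tf_reconstruct(A, b)
```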
A comprehensive study on the relationship between image quality and imaging dose in low-dose cone beam CT
While compressed sensing (CS) based reconstructions have been developed for
low-dose CBCT, a clear understanding on the relationship between the image
quality and imaging dose at low dose levels is needed. In this paper, we
quantitatively investigate this subject in a comprehensive manner with extensive
experimental and simulation studies. The basic idea is to plot image quality
and imaging dose together as functions of number of projections and mAs per
projection over the whole clinically relevant range. A clear understanding on
the tradeoff between image quality and dose can be achieved and optimal
low-dose CBCT scan protocols can be developed for various imaging tasks in
IGRT. Main findings of this work include: 1) Under the CS framework, image
quality has little degradation over a large dose range, and the degradation
becomes evident when the dose < 100 total mAs. A dose < 40 total mAs leads to a
dramatic image degradation. Optimal low-dose CBCT scan protocols likely fall in
the dose range of 40-100 total mAs, depending on the specific IGRT
applications. 2) Among different scan protocols at a constant low-dose level,
the super sparse-view reconstruction with fewer than 50 projections is the
most challenging case, even with strong regularization. Better image quality
can be acquired with other low mAs protocols. 3) The optimal scan protocol is
the combination of a medium number of projections and a medium level of
mAs/view. This is more evident when the dose is around 72.8 total mAs or below
and when the ROI is a low-contrast or high-resolution object. Based on our
results, the optimal number of projections is around 90 to 120. 4) The
clinically acceptable lowest dose level is task dependent. In our study,
72.8 total mAs is a safe dose level for visualizing low-contrast objects, while 12.2
total mAs is sufficient for detecting high-contrast objects of diameter greater
than 3 mm.
Comment: 19 pages, 12 figures, submitted to Physics in Medicine and Biology
GPU-based Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
High radiation dose in CT scans increases a lifetime risk of cancer and has
become a major clinical concern. Recently, iterative reconstruction algorithms
with Total Variation (TV) regularization have been developed to reconstruct CT
images from highly undersampled data acquired at low mAs levels in order to
reduce the imaging dose. Nonetheless, TV regularization may lead to
over-smoothed images and lost edge information. To solve this problem, in this
work we develop an iterative CT reconstruction algorithm with edge-preserving
TV regularization to reconstruct CT images from highly undersampled data
obtained at low mAs levels. The CT image is reconstructed by minimizing an
energy consisting of an edge-preserving TV norm and a data fidelity term posed
by the x-ray projections. The edge-preserving TV term is proposed to
preferentially perform smoothing only on non-edge parts of the image in order to
avoid over-smoothing, which is realized by introducing a penalty weight to the
original total variation norm. Our iterative algorithm is implemented on GPU to
improve its speed. We test our reconstruction algorithm on a digital NCAT
phantom, a physical chest phantom, and a Catphan phantom. Reconstruction
results from a conventional FBP algorithm and a TV regularization method
without an edge-preserving penalty are also presented for comparison purposes. The
experimental results illustrate that both the TV-based algorithm and our
edge-preserving TV algorithm outperform the conventional FBP algorithm in
suppressing streaking artifacts and image noise in the low-dose context.
Our edge-preserving algorithm is superior to the TV-based algorithm in that it
can preserve more information of fine structures and therefore maintain
acceptable spatial resolution.
Comment: 21 pages, 6 figures, 2 tables
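The sketch below illustrates the weighted (edge-preserving) TV idea on a 2-D denoising toy problem rather than full CT reconstruction, and is not the authors' GPU implementation: the energy 0.5||u - f||^2 + lam * sum w |grad u| (smoothed) is minimized by gradient descent, with weights w chosen small at strong edges so those edges are smoothed less. The weight formula and all parameters are illustrative assumptions.

```python
# Edge-preserving (weighted) TV denoising by gradient descent, 2-D toy sketch.
import numpy as np

def grad(u):
    """Forward differences with the last difference set to zero (Neumann boundary)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return ux, uy

def divergence(px, py):
    """Discrete divergence, the negative adjoint of `grad` above."""
    dx = np.zeros_like(px)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy = np.zeros_like(py)
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def edge_preserving_tv_denoise(f, lam=0.2, delta=0.3, eps=0.1, step=0.1, n_iter=200):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum w * sqrt(|grad u|^2 + eps^2)."""
    fx, fy = grad(f)
    w = np.exp(-(fx ** 2 + fy ** 2) / delta ** 2)   # ~0 at strong edges, ~1 in flat regions
    u = f.copy()
    for _ in range(n_iter):
        ux, uy = grad(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        tv_grad = -divergence(w * ux / mag, w * uy / mag)
        u = u - step * ((u - f) + lam * tv_grad)
    return u

# toy usage: noisy piecewise-constant image
rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = edge_preserving_tv_denoise(noisy)
```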
Discovery of Self-Assembling π-Conjugated Peptides by Active Learning-Directed Coarse-Grained Molecular Simulation
Electronically-active organic molecules have demonstrated great promise as
novel soft materials for energy harvesting and transport. Self-assembled
nanoaggregates formed from π-conjugated oligopeptides composed of an
aromatic core flanked by oligopeptide wings offer emergent optoelectronic
properties within a water soluble and biocompatible substrate. Nanoaggregate
properties can be controlled by tuning core chemistry and peptide composition,
but the sequence-structure-function relations remain poorly characterized. In
this work, we employ coarse-grained molecular dynamics simulations within an
active learning protocol employing deep representational learning and Bayesian
optimization to efficiently identify molecules capable of assembling pseudo-1D
nanoaggregates with good stacking of the electronically-active π-cores. We
consider the DXXX-OPV3-XXXD oligopeptide family, where D is an Asp residue and
OPV3 is an oligophenylene vinylene oligomer (1,4-distyrylbenzene), to identify
the top-performing XXX tripeptides among all 20^3 = 8,000 possible
sequences. By direct simulation of only 2.3% of this space, we identify
molecules predicted to exhibit superior assembly relative to those reported in
prior work. Spectral clustering of the top candidates reveals new design rules
governing assembly. This work establishes new understanding of DXXX-OPV3-XXXD
assembly, identifies promising new candidates for experimental testing, and
presents a computational design platform that can be generically extended to
other peptide-based and peptide-like systems.
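The sketch below shows an active-learning loop of the general kind described above (not the authors' pipeline, which couples deep representation learning with coarse-grained MD): a Gaussian-process surrogate over one-hot tripeptide encodings selects which sequences to evaluate next via expected improvement. The function simulate_assembly_score is a hypothetical stand-in for the expensive CG-MD stacking metric, and all hyperparameters are illustrative.

```python
# Active-learning / Bayesian-optimization loop over XXX tripeptides (toy sketch).
import itertools
import numpy as np
from scipy.stats import norm

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")          # 20 residues -> 20**3 = 8000 tripeptides
SEQS = ["".join(s) for s in itertools.product(AMINO_ACIDS, repeat=3)]

def one_hot(seq):
    x = np.zeros((3, 20))
    for i, aa in enumerate(seq):
        x[i, AMINO_ACIDS.index(aa)] = 1.0
    return x.ravel()

X_ALL = np.array([one_hot(s) for s in SEQS])

def simulate_assembly_score(seq):
    """Hypothetical placeholder for the expensive CG-MD evaluation of stacking quality."""
    rng = np.random.default_rng(abs(hash(seq)) % (2 ** 32))
    return rng.normal()

def gp_posterior(X_train, y_train, X_test, length=1.0, noise=1e-3):
    """GP regression with an RBF kernel; returns posterior mean and standard deviation."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length ** 2))
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_train, X_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ji->i", Ks.T, np.linalg.solve(K, Ks))
    return mean, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mean, std, best):
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

# active-learning loop: start from a few random sequences, then pick by EI
rng = np.random.default_rng(0)
tried = list(rng.choice(len(SEQS), 10, replace=False))
scores = [simulate_assembly_score(SEQS[i]) for i in tried]
for _ in range(20):                                  # evaluates only a tiny fraction of the space
    mean, std = gp_posterior(X_ALL[tried], np.array(scores), X_ALL)
    ei = expected_improvement(mean, std, max(scores))
    ei[tried] = -np.inf                              # do not re-evaluate known sequences
    nxt = int(np.argmax(ei))
    tried.append(nxt); scores.append(simulate_assembly_score(SEQS[nxt]))
best_seq = SEQS[tried[int(np.argmax(scores))]]
```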
Data quality considerations for evaluating COVID-19 treatments using real world data: learnings from the National COVID Cohort Collaborative (N3C)
Background: Multi-institution electronic health records (EHR) are a rich source of real world data (RWD) for generating real world evidence (RWE) regarding the utilization, benefits and harms of medical interventions. They provide access to clinical data from large pooled patient populations in addition to laboratory measurements unavailable in insurance claims-based data. However, secondary use of these data for research requires specialized knowledge and careful evaluation of data quality and completeness. We discuss data quality assessments undertaken during the conduct of prep-to-research, focusing on the investigation of treatment safety and effectiveness.
Methods: Using the National COVID Cohort Collaborative (N3C) enclave, we defined a patient population using criteria typical in non-interventional inpatient drug effectiveness studies. We present the challenges encountered when constructing this dataset, beginning with an examination of data quality across data partners. We then discuss the methods and best practices used to operationalize several important study elements: exposure to treatment, baseline health comorbidities, and key outcomes of interest.
Results: We share our experiences and lessons learned when working with heterogeneous EHR data from over 65 healthcare institutions and 4 common data models. We discuss six key areas of data variability and quality. (1) The specific EHR data elements captured from a site can vary depending on source data model and practice. (2) Data missingness remains a significant issue. (3) Drug exposures can be recorded at different levels and may not contain route of administration or dosage information. (4) Reconstruction of continuous drug exposure intervals may not always be possible. (5) EHR discontinuity is a major concern for capturing history of prior treatment and comorbidities. Lastly, (6) access to EHR data alone limits the potential outcomes which can be used in studies.
Conclusions: The creation of large scale centralized multi-site EHR databases such as N3C enables a wide range of research aimed at better understanding treatments and health impacts of many conditions including COVID-19. As with all observational research, it is important that research teams engage with appropriate domain experts to understand the data in order to define research questions that are both clinically important and feasible to address using these real world data.